
    Measuring Defect Datasets Sensitivity to Attributes Variation

    The correlation between software project and product attributes and the quality status of their modules (faulty or not) is the subject of several research papers in the software testing and maintenance fields. In this paper, a tool is built to change the values of software datasets' attributes and study the impact of this change on the modules' defect status. The goal is to find those specific attributes that highly correlate with the module defect attribute. An algorithm is developed to automatically predict the module defect status based on the values of the module attributes and on their change from reference or initial values. For each attribute of those software projects, the results can show whether the attribute is a major player in deciding the defect status of the project or of a specific module. The results were consistent with, and in some cases better than, most surveyed defect prediction algorithms. They also show that this can be a very powerful method to understand each attribute's individual impact, if any, on the module quality status and how it can be improved.
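    A minimal sketch of the sensitivity idea described above, assuming a NASA-style defect dataset in CSV form; the file name, column names, perturbation factor, and choice of classifier are illustrative assumptions, not the paper's actual tool.

```python
# Hypothetical sketch: train a defect predictor, perturb one attribute away
# from its reference values, and count how often the predicted defect status
# flips. The CSV layout and the "defects" column are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("pc1.csv")                       # assumed NASA-style defect dataset
X, y = data.drop(columns=["defects"]), data["defects"]

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.predict(X)

sensitivity = {}
for attr in X.columns:
    perturbed = X.copy()
    perturbed[attr] = perturbed[attr] * 1.5         # move the attribute off its reference value
    flips = (model.predict(perturbed) != baseline).mean()
    sensitivity[attr] = flips                       # fraction of modules whose predicted status changed

for attr, score in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {score:.2%} of module predictions flipped")
```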

    Approaches for Testing and Evaluation of XACML Policies

    Security services are provided through applications, operating systems, databases, and the network. There are many proposals to use policies to define, implement, and evaluate security services. We discuss a full test automation framework for testing XACML-based policies. Using policies as input, the developed tool can generate test cases based on the policy and the general XACML model. We evaluated a large dataset of policy implementations. The collection includes more than 200 test cases that represent instances of policies. Policies are executed and verified using requests and responses generated for each policy instance. The WSO2 platform is used to perform different testing activities on the evaluated policies.
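    A rough sketch of policy-driven request generation, under the assumption that candidate requests are enumerated from the attribute values found in the policy's Match elements; the file name is a placeholder, and a real framework (such as the WSO2-based setup mentioned above) would render these as XACML Request documents.

```python
# Pull attribute values out of an XACML 3.0 policy and enumerate combinations
# as candidate test requests. Output dicts stand in for XACML <Request> bodies.
import itertools
import xml.etree.ElementTree as ET

XACML = "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"

def attribute_values(policy_file):
    """Collect distinct AttributeValue texts per AttributeId found in the policy."""
    values = {}
    tree = ET.parse(policy_file)
    for match in tree.iter(f"{{{XACML}}}Match"):
        designator = match.find(f"{{{XACML}}}AttributeDesignator")
        value = match.find(f"{{{XACML}}}AttributeValue")
        if designator is not None and value is not None:
            values.setdefault(designator.get("AttributeId"), set()).add(value.text)
    return values

def candidate_requests(policy_file):
    """Yield one candidate request per combination of observed attribute values."""
    values = attribute_values(policy_file)
    ids = sorted(values)
    for combo in itertools.product(*(sorted(values[i]) for i in ids)):
        yield dict(zip(ids, combo))

for request in candidate_requests("policy.xml"):    # assumed sample policy file
    print(request)
```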

    Activities and Trends in Testing Graphical User Interfaces Automatically

    This study introduces some new approaches for software test automation in general and for testing graphical user interfaces in particular. The study presents ideas for the different stages of the test automation framework. The main activities of a test automation framework include test case generation, execution, and verification; other umbrella activities include modeling, critical path selection, and others. In modeling, a methodology is presented to transform the user interface of applications into XML (eXtensible Markup Language) files. The purpose of this intermediate transformation is to enable producing test automation components in a format that is easier to deal with (in terms of testing). Test cases are generated from this model, then executed and verified on the actual implementation. The transformation of a product's Graphical User Interface (GUI) into XML files also enables the documentation and storage of the interface description. There are several cases where we need a stored, documented format of the GUI; having it in the universal XML format allows it to be retrieved and reused elsewhere. XML files, with their hierarchical structure, make it possible and easy to preserve the hierarchical structure of the user interface. Several GUI structural metrics are also introduced to evaluate the user interface from a testing perspective. Those metrics can be collected automatically using the developed tool without user intervention.
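    An illustrative sketch of the GUI-to-XML transformation, assuming a toy widget tree; the Widget class and its fields are stand-ins for whatever the real GUI toolkit exposes, and the depth metric is only one example of a structural metric that could be collected from the model.

```python
# Serialize a widget hierarchy into XML so the GUI structure can be stored,
# documented, and used for test generation and structural metrics.
from dataclasses import dataclass, field
from typing import List
import xml.etree.ElementTree as ET

@dataclass
class Widget:
    kind: str                       # e.g. "Window", "Button", "TextBox"
    name: str
    children: List["Widget"] = field(default_factory=list)

def to_xml(widget: Widget) -> ET.Element:
    """Recursively map the widget hierarchy onto an XML element tree."""
    element = ET.Element(widget.kind, name=widget.name)
    for child in widget.children:
        element.append(to_xml(child))
    return element

def tree_depth(element: ET.Element) -> int:
    """A simple structural metric: maximum nesting depth of the GUI model."""
    return 1 + max((tree_depth(child) for child in element), default=0)

gui = Widget("Window", "Main", [Widget("Button", "OK"), Widget("TextBox", "Name")])
root = to_xml(gui)
print(ET.tostring(root, encoding="unicode"))
print("depth:", tree_depth(root))
```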

    Enhance Rule Based Detection for Software Fault Prone Modules

    Software quality assurance is necessary to increase the level of confidence in the developed software and reduce the overall cost of developing software projects. The problem addressed in this research is the prediction of fault-prone modules using data mining techniques. Predicting fault-prone modules allows software managers to allocate more testing and resources to such modules. It can also motivate investment in better design in future systems to avoid building error-prone modules. Software quality models based on data mined from previous projects can identify fault-prone modules in a current, similar development project, once similarity between the projects is established. In this paper, we applied different rule-based data mining classification techniques to several publicly available datasets from the NASA software repository (e.g., PC1, PC2, etc.). The goal was to classify the software modules as either fault prone or not fault prone. The paper proposes a modification of the RIDOR algorithm, and the results show that the enhanced RIDOR algorithm outperforms other classification techniques in terms of the number of extracted rules and accuracy. The implemented algorithm learns defect prediction by mining static code attributes, which are then used to build a new defect predictor with high accuracy and a low error rate.
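    A crude ripple-down-rule style sketch of the idea behind RIDOR (a default class plus exception rules that override it), not the enhanced algorithm from the paper, which was built on the Weka implementation; the attribute names, thresholding strategy, and toy module data are all illustrative assumptions.

```python
# Learn a default class from the majority label, then one-condition exception
# rules that flip the default for apparently fault-prone modules.
from collections import Counter

def learn_default(labels):
    """Default rule: predict the majority class."""
    return Counter(labels).most_common(1)[0][0]

def learn_exceptions(rows, labels, default, attributes):
    """Exception rules of the form (attribute, threshold): value >= threshold flips the default."""
    rules = []
    for attr in attributes:
        exception_values = [row[attr] for row, label in zip(rows, labels) if label != default]
        if exception_values:
            rules.append((attr, min(exception_values)))   # crude threshold over the exception cases
    return rules

def predict(row, default, rules):
    if any(row[attr] >= threshold for attr, threshold in rules):
        return not default                                # an exception rule fires
    return default

rows = [{"loc": 120, "cyclomatic": 14}, {"loc": 30, "cyclomatic": 2},
        {"loc": 25, "cyclomatic": 3}, {"loc": 40, "cyclomatic": 4},
        {"loc": 200, "cyclomatic": 22}]
labels = [True, False, False, False, True]                # True = fault prone
default = learn_default(labels)
rules = learn_exceptions(rows, labels, default, ["loc", "cyclomatic"])
print([predict(row, default, rules) for row in rows])
```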

    Evaluating Network Test Scenarios for Network Simulators Systems

    Networks continue to grow as industries use both wired and wireless networks. Creating experiments to test those networks can be very expensive if conducted on production networks; therefore, the evaluation of networks and their performance is usually conducted using simulation. This growing reliance on simulation raises the risk of correctness and validation issues. Today, many network simulators have widely varying focuses and are employed in different fields of research, so the trustworthiness of results produced by simulation models must be investigated. The goal of this work is, first, to compare and assess the performance of three prominent network simulators (NS-2, NS-3, and OMNeT++) by considering the following qualitative characteristics: architectural design, correctness, performance, usability, features, and trends. Second, it introduces the concept of mutation testing to design appropriate network scenarios for protocol evaluation, since it remains doubtful whether commonly used scenarios are sufficient to support conclusions about protocol performance and effectiveness. A large-scale simulation model was implemented using the ad hoc on-demand distance vector and destination-sequenced distance vector routing protocols to compare performance, correctness, and usability. This study addresses an interesting question about the validation process: "Are you building the right simulation model in the right environment?" In conclusion, network simulation alone cannot determine the correctness and usefulness of the implemented protocol; software testing approaches should be considered to validate the quality of the network model and the test scenarios being used.
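    A hedged sketch of how mutation-testing ideas might be applied to simulation scenarios: mutate one scenario parameter at a time and flag the mutant as "killed" when the measured protocol metric diverges noticeably from the baseline run. The parameters, mutation operators, divergence threshold, and the run_simulation stub (standing in for an actual NS-2/NS-3/OMNeT++ invocation) are all assumptions.

```python
import copy

baseline_scenario = {"nodes": 50, "speed_mps": 5, "pause_s": 10, "rate_pps": 4}

mutation_operators = {
    "nodes": lambda v: v * 2,           # denser network
    "speed_mps": lambda v: v * 4,       # much higher mobility
    "pause_s": lambda v: 0,             # no pause time
    "rate_pps": lambda v: v * 5,        # heavier traffic load
}

def run_simulation(scenario):
    """Placeholder: would launch the simulator and return packet delivery ratio."""
    return 0.9 - 0.002 * scenario["nodes"] - 0.01 * scenario["speed_mps"]

baseline_pdr = run_simulation(baseline_scenario)
for parameter, mutate in mutation_operators.items():
    mutant = copy.deepcopy(baseline_scenario)
    mutant[parameter] = mutate(mutant[parameter])
    delta = abs(run_simulation(mutant) - baseline_pdr)
    status = "killed" if delta > 0.05 else "survived"   # assumed divergence threshold
    print(f"mutant({parameter}): delta={delta:.3f} -> {status}")
```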

    Call Graph Based Metrics to Evaluate Software Design Quality

    Software defect prediction was introduced to support development and maintenance activities, for example by improving software quality through finding errors, or patterns of errors, early in the software development process. Software defect prediction plays the role of facilitating maintenance in terms of effort, time, and, more importantly, cost prediction for software maintenance and evolution activities. In this research, the software call graph model is used to evaluate its ability to predict quality-related attributes of developed software products. As a case study, the call graph model is generated for several applications in order to represent and reflect their degree of complexity, especially in terms of understandability, testability, and maintenance effort. This call graph model is then used to collect software product attributes and formulate several call graph based metrics. The extracted metrics are investigated for correlation with bugs collected from customer bug reports for the evaluated applications. Those bugs are compiled into dataset files to be used as input to a data miner for classification, prediction, and association analysis. Finally, the results of the analysis are evaluated in terms of the correlation between call graph based metrics and software product bugs. In this research, we assert that call graph based metrics are appropriate for detecting and predicting software defects, making the maintenance and testing activities after delivery easier to estimate and assess.
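    An illustrative sketch of call graph based metrics: build a directed call graph and compute fan-in, fan-out, and the transitive call footprint per function. The edge list is a toy example; a real study would extract it from the application with a static-analysis or profiling tool.

```python
import networkx as nx

calls = [("main", "parse"), ("main", "report"), ("parse", "tokenize"),
         ("parse", "validate"), ("report", "validate")]
graph = nx.DiGraph(calls)

for function in graph.nodes:
    fan_out = graph.out_degree(function)              # functions it calls directly
    fan_in = graph.in_degree(function)                # callers depending on it
    reach = len(nx.descendants(graph, function))      # transitive call footprint
    print(f"{function}: fan_in={fan_in}, fan_out={fan_out}, reachable={reach}")
```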

    Ensemble Models for Intrusion Detection System Classification

    Using data analytics for Intrusion Detection and Prevention Systems (IDS/IPS) is a continuing research problem due to the evolving nature of the threats and changes in the major influencing factors. The main challenges in this area are designing rules that can predict malware in unknown territories, and dealing with the complexity of the problem and the conflicting requirements of high detection accuracy and high efficiency. In this scope, we evaluated the use of state-of-the-art ensemble learning models to improve the performance and efficiency of IDS/IPS. We compared our approaches with existing approaches using popular open-source datasets available in this area.
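    A minimal sketch of an ensemble-learning evaluation of the kind described above, assuming an intrusion dataset already flattened to numeric features; the file name, the binary "label" column, and the particular estimators are assumptions (e.g. a preprocessed NSL-KDD or CICIDS export), not the paper's exact setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

data = pd.read_csv("ids_features.csv")                  # assumed preprocessed IDS dataset
X, y = data.drop(columns=["label"]), data["label"]

# Soft-voting ensemble over three heterogeneous base learners.
ensemble = VotingClassifier(estimators=[
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
], voting="soft")

print("cv accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```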

    Clustering and Classification of Email Contents

    Users depend heavily on email as one of their major sources of communication. Its importance and usage continue to grow despite the evolution of mobile applications, social networks, etc. Emails are used at both the personal and professional levels, and they can be considered official documents in communication among users. Email data mining and analysis can be conducted for several purposes, such as spam detection and classification, subject classification, etc. In this paper, a large set of personal emails is used for the purpose of folder and subject classification. Algorithms are developed to perform clustering and classification on this large text collection. Classification based on N-Grams is shown to be the best for such a large text collection, especially as the text is bilingual (with English and Arabic content).
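    A hedged sketch of an N-Gram based clustering and classification pipeline: character n-gram TF-IDF features work across mixed Arabic/English text without language-specific tokenization. The example emails, folder labels, and model choices are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

emails = ["Meeting moved to Tuesday", "Invoice attached for March",
          "اجتماع الفريق غدا", "فاتورة الشهر مرفقة"]
folders = ["work", "finance", "work", "finance"]

# Character n-grams (2-4) handle bilingual content uniformly.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(emails)

clusters = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(X)   # unsupervised grouping
classifier = MultinomialNB().fit(X, folders)                                # supervised folder labels

print("clusters:", clusters.tolist())
print("predicted folder:", classifier.predict(vectorizer.transform(["غدا اجتماع"]))[0])
```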

    Evaluation of Cost Estimation Metrics: Towards a Unified Terminology

    Cost overrun of software projects is a major cause of their failure. To facilitate accurate software cost estimation, several metrics, tools, and datasets exist. In this paper, we evaluate and compare different metrics and datasets in terms of similarities and differences among the involved software attributes. These metrics forecast project cost estimates based on different software attributes. Some of these metrics are public and standard, while others are only employed in a particular metric tool or dataset. Sixteen public cost estimation datasets are collected and analyzed. Different perspectives are used to compare and classify those datasets. Tools for feature selection and classification are used to find the most important attributes in cost estimation datasets toward the goal of effort prediction. To obtain better estimates, cost estimates from different resources need to be correlated, which requires a unified standard for software cost estimation metric tools and datasets. It is pertinent that a common cost estimation model may not work for every project due to diverse project sizes, application areas, etc. We suggest having a standardized terminology for the project attributes used for cost estimation. This would improve cost estimation, as multiple metrics could be applied to a project without much additional effort.
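    An illustrative sketch of the feature-selection step: rank dataset attributes by how much information they carry about effort. The CSV name and the "effort" target column follow the usual layout of public cost datasets (such as Desharnais or COCOMO-style tables) but are assumptions here.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

data = pd.read_csv("cost_dataset.csv")               # assumed public cost estimation dataset
X, y = data.drop(columns=["effort"]), data["effort"]

# Score each attribute by its mutual information with the effort target.
scores = mutual_info_regression(X, y, random_state=0)
ranking = sorted(zip(X.columns, scores), key=lambda kv: -kv[1])
for attribute, score in ranking:
    print(f"{attribute}: {score:.3f}")
```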

    MQVC: Measuring Quranic Verses Similarity and Sura Classification Using N-Gram

    Extensive research efforts in the area of Information Retrieval have been concentrated on developing retrieval systems for the Arabic language across the different natural language processing and information retrieval methodologies. However, little effort has been devoted in those areas to knowledge extraction from the holy book of Islam, the Quran. In this paper, we present an approach (MQVC) for retrieving the verses most similar to a user input verse given as a query. To demonstrate the accuracy of our approach, we performed a set of experiments and compared the results with an evaluation by a Quran specialist who manually identified all chapters and verses relevant to the targeted verse in our study. The MQVC approach was applied to 70 out of the 114 Quran chapters. We picked 40 verses randomly and calculated the precision to evaluate the accuracy of our approach. We utilized N-Grams to extend the work by performing an experiment with a machine learning algorithm (the LibSVM classifier in Weka) to classify Quran chapters based on the most common scholarly classification: Makki and Madani chapters.
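    A hedged sketch of the retrieval and classification idea: represent verses with character n-gram TF-IDF, rank them by cosine similarity to a query verse, and feed the same vectors to an SVM (scikit-learn's SVC standing in for Weka's LibSVM) for Makki/Madani classification. The two sample verses and their labels are placeholders, not a real corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

verses = ["الحمد لله رب العالمين", "الرحمن الرحيم"]           # placeholder verse texts
labels = ["Makki", "Madani"]                                  # placeholder chapter labels

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = vectorizer.fit_transform(verses)

# Rank stored verses by similarity to the query verse.
query = vectorizer.transform(["رب العالمين"])
scores = cosine_similarity(query, X)[0]
print("most similar verse:", verses[int(scores.argmax())])

# Reuse the same n-gram features for chapter classification.
classifier = SVC(kernel="linear").fit(X, labels)
print("predicted class:", classifier.predict(query)[0])
```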